On the Proximal Gradient Algorithm with Alternated Inertia
Authors
Abstract
Similar resources
The proximal-proximal gradient algorithm
We consider the problem of minimizing a convex objective which is the sum of a smooth part, with Lipschitz continuous gradient, and a nonsmooth part. Inspired by various applications, we focus on the case when the nonsmooth part is a composition of a proper closed convex function P and a nonzero affine map, with the proximal mappings of τP, τ > 0, easy to compute. In this case, a direct applic...
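As background for the composite model described in this abstract (a smooth term plus a nonsmooth term whose proximal mapping is easy to compute), here is a minimal sketch of the plain proximal gradient step x⁺ = prox_{τg}(x − τ∇f(x)). The least-squares-plus-ℓ1 instance, the function names, and all numerical values are illustrative assumptions, not the proximal-proximal gradient method of the cited paper.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding) -- an example of a
    proximal mapping that is 'easy to compute'."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, n_iter=500):
    """Plain proximal gradient iteration x+ = prox_{step*g}(x - step*grad_f(x)).
    `step` should not exceed 1/L, with L the Lipschitz constant of grad_f."""
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Illustrative least-squares + l1 instance (synthetic data, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of x -> A.T @ (A @ x - b)
x_hat = proximal_gradient(lambda x: A.T @ (A @ x - b), prox_l1,
                          np.zeros(100), 1.0 / L)
```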
Proximal gradient algorithm for group sparse optimization
In this paper, we propose a proximal gradient algorithm for solving a general nonconvex and nonsmooth optimization model of minimizing the summation of a C^{1,1} function and a grouped separable lsc function. This model includes the group sparse optimization via ℓ_{p,q} regularization as a special case. Our algorithmic scheme presents a unified framework for several well-known iterative thresholding ...
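For the grouped regularizers mentioned above, the ℓ_{2,1} case (p = 2, q = 1) admits a closed-form proximal mapping given by block soft-thresholding; a minimal sketch follows. The group partition and threshold are chosen purely for illustration and are not taken from the cited paper.

```python
import numpy as np

def prox_group_l21(v, t, groups):
    """Proximal operator of t * sum_g ||v_g||_2 (the l_{2,1} special case of the
    grouped regularizer): block soft-thresholding applied group by group.
    `groups` is a list of index arrays; the partition below is illustrative."""
    out = v.copy()
    for g in groups:
        norm_g = np.linalg.norm(v[g])
        out[g] = 0.0 if norm_g <= t else (1.0 - t / norm_g) * v[g]
    return out

# Six coordinates split into two groups of three; the second group is shrunk to zero.
v = np.array([0.5, -1.0, 2.0, 0.1, 0.05, -0.02])
print(prox_group_l21(v, 0.3, [np.arange(0, 3), np.arange(3, 6)]))
```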
Decomposable norm minimization with proximal-gradient homotopy algorithm
We study the convergence rate of the proximal-gradient homotopy algorithm applied to norm-regularized linear least squares problems, for a general class of norms. The homotopy algorithm reduces the regularization parameter in a series of steps, and uses a proximal-gradient algorithm to solve the problem at each step. The proximal-gradient algorithm has a linear rate of convergence given that the obj...
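A hedged sketch of the homotopy idea described above, specialized to ℓ1-regularized least squares: the regularization parameter is reduced geometrically, and each stage is solved by warm-started proximal gradient steps. The decrease factor, inner iteration count, and stopping rule are assumptions, not the schedule analyzed in the cited paper.

```python
import numpy as np

def prox_grad_homotopy(A, b, lam_target, eta=0.7, inner_iters=200):
    """Homotopy loop for l1-regularized least squares: the regularization
    parameter lam is decreased geometrically (factor `eta`) and each stage runs
    warm-started proximal gradient (ISTA) steps from the previous solution."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    lam = np.max(np.abs(A.T @ b))            # for lam >= this value the solution is 0
    while lam > lam_target:
        lam = max(eta * lam, lam_target)
        for _ in range(inner_iters):
            z = x - step * (A.T @ (A @ x - b))
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x
```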
On Perturbed Proximal Gradient Algorithms
We study a version of the proximal gradient algorithm for which the gradient is intractable and is approximated by Monte Carlo methods (and in particular Markov Chain Monte Carlo). We derive conditions on the step size and the Monte Carlo batch size under which convergence is guaranteed: both increasing batch size and constant batch size are considered. We also derive non-asymptotic bounds for ...
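A minimal sketch of the perturbed iteration described above, with the gradient replaced by an average of Monte Carlo samples and a linearly increasing batch size. The schedule, the step size, and the toy noisy-quadratic example are assumptions and do not reproduce the conditions required by the paper's theory.

```python
import numpy as np

def perturbed_prox_gradient(sample_grad, prox, x0, step, n_iter=100, batch0=10):
    """Proximal gradient step where the gradient is the average of Monte Carlo
    samples; the batch size grows linearly with the iteration counter
    (an example of the 'increasing batch size' regime)."""
    x = x0.copy()
    for n in range(n_iter):
        batch = batch0 * (n + 1)
        g = np.mean([sample_grad(x) for _ in range(batch)], axis=0)
        x = prox(x - step * g, step)
    return x

# Toy use: noisy gradient of 0.5*||x - c||^2 with an l1 prox of weight 0.1.
rng = np.random.default_rng(1)
c = np.array([1.0, -2.0, 0.0])
noisy_grad = lambda x: (x - c) + rng.normal(scale=0.5, size=x.shape)
prox_l1 = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - 0.1 * t, 0.0)
x_est = perturbed_prox_gradient(noisy_grad, prox_l1, np.zeros(3), step=0.5)
```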
On Stochastic Proximal Gradient Algorithms
We study a perturbed version of the proximal gradient algorithm for which the gradient is not known in closed form and should be approximated. We address the convergence and derive a non-asymptotic bound on the convergence rate for the perturbed proximal gradient, a perturbed averaged version of the proximal gradient algorithm and a perturbed version of the fast iterative shrinkage-thresholding ...
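For reference, a sketch of the exact-gradient fast iterative shrinkage-thresholding algorithm (FISTA), whose perturbed version is analyzed in the abstract above; presumably the "alternated inertia" of the title paper applies this kind of extrapolation only on every other iteration. The names grad_f and prox_g are caller-supplied placeholders, not APIs from the cited papers.

```python
import numpy as np

def fista(grad_f, prox_g, x0, step, n_iter=500):
    """Exact-gradient FISTA: a proximal gradient step at the extrapolated point y,
    followed by the standard Nesterov-type update of the extrapolation sequence."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox_g(y - step * grad_f(y), step)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # inertial extrapolation step
        x, t = x_new, t_new
    return x
```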
Journal
Journal title: Journal of Optimization Theory and Applications
Year: 2018
ISSN: 0022-3239, 1573-2878
DOI: 10.1007/s10957-018-1226-4